About the project

I’m a PhD student in Genetics/proteomics. I’m taking this course to refresh my R skills and learn some new stuff.

Introduction to Open Data Science 2018

Link to Github Repo


R Exercise 2 Analysis

Task 1

The data wrangling exercise is in the data folder.

This dataset contains information about students and their attitude/strategy towards learning and their exam scores.

setwd("~/GitHub/IODS-project/data")
data <- read.table("learning2014.csv", header = TRUE, sep = ",")
head(data)
##   Gender Age Attitude Points     Deep     Surf  Stra
## 1      F  53       37     25 3.583333 2.583333 3.375
## 2      M  55       31     12 2.916667 3.166667 2.750
## 3      F  49       25     24 3.500000 2.250000 3.625
## 4      M  53       35     10 3.500000 2.250000 3.125
## 5      M  49       37     22 3.666667 2.833333 3.625
## 6      F  38       38     21 4.750000 2.416667 3.625
names(data)
## [1] "Gender"   "Age"      "Attitude" "Points"   "Deep"     "Surf"    
## [7] "Stra"
dim(data)
## [1] 183   7

There are 7 variables and 183 people.

Task 2

library(ggplot2)
summary(data)
##  Gender       Age           Attitude         Points           Deep      
##  F:122   Min.   :17.00   Min.   :14.00   Min.   : 0.00   Min.   :1.583  
##  M: 61   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:18.00   1st Qu.:3.333  
##          Median :22.00   Median :32.00   Median :22.00   Median :3.667  
##          Mean   :25.58   Mean   :31.21   Mean   :20.61   Mean   :3.696  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:26.00   3rd Qu.:4.083  
##          Max.   :55.00   Max.   :50.00   Max.   :33.00   Max.   :4.917  
##       Surf            Stra      
##  Min.   :1.583   Min.   :1.250  
##  1st Qu.:2.417   1st Qu.:2.562  
##  Median :2.833   Median :3.125  
##  Mean   :2.792   Mean   :3.085  
##  3rd Qu.:3.167   3rd Qu.:3.625  
##  Max.   :4.333   Max.   :5.000

Summary of all variables. The dataset is about students (Age, Gender), their attitudes and approaches towards learning (Attitude, Deep, Stra, Surf), and their exam scores (Points). Combination variables like Deep/Surf/Stra combine several questions from the questionnaire together.

Deep is a combination of questionnaire questions that reflect a “Deep Approach” to learning. (Seeking Meaning, Relating Ideas, Use of Evidence)

Surf is a “Surface Approach” to learning. (Lack of Purpose, Unrelated Memorizing, Syllabus-boundness)

Stra is a “Strategic Approach” to learning. (Organized Studying, Time Management)

I will use the ggplot2 library for the graphics.

#Pie chart of gender
ggplot(data, aes(x=factor(1), fill = factor(Gender)))+geom_bar(width=1)+coord_polar(theta="y")+theme_void()+labs(title="Gender")

There are 122 females and 61 males.

#Histogram of Age
qplot(data$Age, geom="histogram")+labs(title="Histogram for Age") +
  labs(x="Age", y="Count")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

The age distribution is skewed: there are a lot more people in their 20s than in the other age groups. Age groups cannot really be compared directly because the numbers of people in them are so different.
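The stat_bin message above suggests setting the binwidth explicitly; a minimal sketch with geom_histogram (the 2-year binwidth is just an illustrative choice, not part of the original analysis):

#Histogram of Age with an explicit binwidth
ggplot(data, aes(x = Age)) +
  geom_histogram(binwidth = 2) +
  labs(title = "Histogram for Age", x = "Age", y = "Count")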

#Boxplot of gender and points

ggplot(data, aes(x=Gender, y=Points, fill=Gender)) + 
  geom_boxplot()+labs(title="Gender and amount of points")

It looks like males have slightly more points on average than females (the black bar in the middle of the box is the median), but the difference might not be significant. Also, there are a lot more females (122) than males (61), so that might affect this.
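Whether the gender difference in Points is real could be checked with a two-sample test; a minimal sketch (not part of the original exercise):

#Welch two-sample t-test of Points by Gender
t.test(Points ~ Gender, data = data)
#Rank-based alternative if normality is doubtful
wilcox.test(Points ~ Gender, data = data)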

Points and Attitude:

#Linear regression of Attitude and Points
ggplot(data, aes(x = Attitude, y = Points)) + geom_point() + geom_smooth(method="lm")+labs(title="Linear regression of attitude and points")

cor.test(data$Attitude, data$Points, method="pearson")
## 
##  Pearson's product-moment correlation
## 
## data:  data$Attitude and data$Points
## t = 4.8513, df = 181, p-value = 2.635e-06
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.2042082 0.4615619
## sample estimates:
##       cor 
## 0.3392167

A test of the simple linear relationship. The Pearson correlation is significant and weakly positive, 0.339. (Pearson correlation assumes normality, so I am not sure it fully applies here.)
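Since Pearson correlation assumes roughly normally distributed variables, a rank-based Spearman correlation could be used as a sanity check; a minimal sketch:

#Spearman correlation does not assume normality
cor.test(data$Attitude, data$Points, method = "spearman")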

Scatter plot matrix:

library(GGally)
p <- ggpairs(data, mapping = aes(col=Gender, alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)))
p

The scatter plot matrix compiles the distributions, pairwise scatter plots and correlations of all variables, split by gender.

Task 3 & 4

Task 3

Target: Points

linear <- lm(Points ~ Gender+Age+Attitude, data=data)
summary(linear)
## 
## Call:
## lm(formula = Points ~ Gender + Age + Attitude, data = data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -25.153  -2.520   1.716   5.411  13.022 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 10.23020    3.39239   3.016  0.00294 ** 
## GenderM     -0.39852    1.35699  -0.294  0.76934    
## Age         -0.08766    0.07944  -1.103  0.27133    
## Attitude     0.40862    0.08707   4.693 5.33e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.196 on 179 degrees of freedom
## Multiple R-squared:  0.1218, Adjusted R-squared:  0.1071 
## F-statistic: 8.278 on 3 and 179 DF,  p-value: 3.465e-05

The regression coefficients for Gender and Age are not significant, but the coefficient for Attitude is. I will keep Attitude and remove Gender and Age.

summary(lm(Points~Attitude+Stra+Surf, data=data))
## 
## Call:
## lm(formula = Points ~ Attitude + Stra + Surf, data = data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -25.151  -3.212   2.233   5.257  13.694 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  4.77806    5.15767   0.926   0.3555    
## Attitude     0.37570    0.08308   4.522 1.11e-05 ***
## Stra         1.89680    0.78318   2.422   0.0164 *  
## Surf        -0.62623    1.14945  -0.545   0.5866    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.082 on 179 degrees of freedom
## Multiple R-squared:  0.146,  Adjusted R-squared:  0.1317 
## F-statistic:  10.2 on 3 and 179 DF,  p-value: 3.089e-06

The coefficients for Attitude and Stra are significant, but Surf is not. Remove Surf.

Task 4

summary(lm(Points~Attitude+Stra+Deep, data=data))
## 
## Call:
## lm(formula = Points ~ Attitude + Stra + Deep, data = data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -23.499  -2.557   1.755   5.097  15.420 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  9.69651    4.84553   2.001  0.04689 *  
## Attitude     0.40408    0.08178   4.941 1.78e-06 ***
## Stra         2.07504    0.77411   2.681  0.00804 ** 
## Deep        -2.19240    1.08585  -2.019  0.04497 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.998 on 179 degrees of freedom
## Multiple R-squared:  0.1637, Adjusted R-squared:  0.1496 
## F-statistic: 11.68 on 3 and 179 DF,  p-value: 5.036e-07

The model with Attitude, Strategic approach and Deep approach explains the exam results best: the multiple linear regression and all individual coefficients are significant. So the best predictors of a good exam score are the overall attitude of the student and study habits that aim at Organized Studying and Time Management; note, however, that in this model the Deep variable (Seeking Meaning, Relating Ideas, Use of Evidence) actually has a negative coefficient.

R-squared is a statistical measure of how close the data are to the fitted regression line. If the data fell exactly on the regression line, R-squared would be 1 or close to 1. Here the multiple R-squared is low (about 0.16), which says that these variables do not explain most of the variation in exam points; there are other factors that affect it.
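To make the definition concrete, R-squared can be computed by hand from the residual and total sums of squares; a minimal sketch using the same model (refit here as fit for clarity):

#R-squared = 1 - RSS/TSS; should match the Multiple R-squared from summary()
fit <- lm(Points ~ Attitude + Stra + Deep, data = data)
rss <- sum(residuals(fit)^2)
tss <- sum((data$Points - mean(data$Points))^2)
1 - rss / tss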

ggplot(data, aes(x=Points, y=Attitude+Stra+Deep))+geom_point()+geom_smooth(method="lm")+labs(title="Linear regression of  points and attitude+stra+deep")

In the plot there is a slight positive linear relationship between Points and Attitude+Stra+Deep: higher combined scores generally go with higher Points. While Points correlates with Attitude+Stra+Deep, those variables do not explain all of the variation in Points. Other factors that could affect points might be how familiar the student already is with the course material and how much absolute time they spent studying.

Task 5

linear <- lm(Points~Attitude+Stra+Deep, data=data)
par(mfrow = c(2,2))
plot(linear, which = c(1,2,5))

The Q-Q plot shows that the left tail of the residual distribution does not follow normality; it deviates a lot. I assume the left tail contains all the “0 points” people, who answered the questionnaire but possibly didn’t show up for the exam (0 points). These people should probably be removed in a real analysis of the data.

There is also high spread in residuals, but they look mostly grouped linearly.

There is one higher leverage outlier in the data at 0.10 Leverage, while others are mostly between 0.0-0.04.
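As noted above, in a real analysis the zero-point students could arguably be dropped before fitting; a hedged sketch of what that could look like (not run here):

#Refit the model without the students who scored 0 exam points
data_nonzero <- subset(data, Points > 0)
linear_nonzero <- lm(Points ~ Attitude + Stra + Deep, data = data_nonzero)
par(mfrow = c(2, 2))
plot(linear_nonzero, which = c(1, 2, 5))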


Chapter 3

Task 1-2

setwd("~/GitHub/IODS-project/data")
alc <- read.table("~/GitHub/IODS-project/data/alc.csv", header = TRUE, sep = ",")
names(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

This is a dataset made from two Portuguese school datasets. The attributes include descriptors of the students (age, sex, family size, the parents’ professions, grades, etc.) and alcohol usage (Dalc = workday alcohol consumption, Walc = weekend alcohol consumption). alc_use combines Dalc and Walc, and high_use flags alcohol usage higher than 2 (on a scale from 1 - very low to 5 - very high).
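Presumably (as in the data wrangling step) alc_use is the average of the workday and weekend consumption and high_use is a logical flag; a minimal sketch of how those columns would be created:

#alc_use as the mean of workday and weekend consumption, high_use as a TRUE/FALSE flag
alc$alc_use  <- (alc$Dalc + alc$Walc) / 2
alc$high_use <- alc$alc_use > 2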

Task 3-4

Age

The legal drinking age in Portugal is 16 for beer/cider and 18 for hard liquor. I would expect there to be significantly more high alcohol users among students that are of legal drinking age.

library(ggplot2)
alc$legal <- ifelse(alc$age >= 16, TRUE, FALSE)
ggplot(data = alc, aes(x=high_use))+geom_bar()+facet_wrap("legal")+geom_text(stat='count', aes(label=..count..))

#Students under legal drinking age that are high users
18/(63+18)
## [1] 0.2222222
#Students over legal drinking age that are high users
96/(205+96)
## [1] 0.3189369

Here the grouping is based on whether the student is of legal drinking age (TRUE/FALSE). The graph shows that there are many more respondents of legal drinking age. The percentage of high users among students under the legal drinking age is 22%, and among students over the legal drinking age it is 32%. There’s a noticeable difference.
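The same percentages could also be computed directly from a cross-tabulation instead of typing the counts by hand; a minimal sketch:

#Proportion of high users within each legal-age group (row-wise proportions)
prop.table(table(legal = alc$legal, high_use = alc$high_use), margin = 1)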

Go out

goout - going out with friends (numeric: from 1 - very low to 5 - very high)

Assumption: Kids that go out a lot with their friends are more likely to drink than those who don’t.

ggplot(data = alc, aes(x=high_use))+geom_bar()+facet_wrap("goout")+geom_text(stat='count', aes(label=..count..))

ftable(table(alc$high_use, alc$goout))
##          1   2   3   4   5
##                           
## FALSE   19  84 103  41  21
## TRUE     3  16  23  40  32

Here it’s very obvious: kids that go out a lot drink a lot more than kids that don’t go out much.

Grades

Assumption: Kids who get more than half the points in every grade (first period, second period, final grade) are less likely to drink a lot than kids that don’t even manage half the points.

alc$grades <- ifelse(alc$G1 >= 11 & alc$G2 >= 11 & alc$G3 >=11, TRUE, FALSE)
ggplot(data = alc, aes(x=high_use))+geom_bar()+facet_wrap("grades")+geom_text(stat='count', aes(label=..count..))

#false
66/(66+118)
## [1] 0.3586957
#true
48/(48+150)
## [1] 0.2424242

This looks like it might be significant: 24.2% of the kids that get more than half the points are high alcohol users, while 35.9% of the kids that don’t are.

Studytime

Assumption: Kids who spend more time studying drink less.

ggplot(data = alc, aes(x=high_use))+geom_bar()+facet_wrap("studytime")+geom_text(stat='count', aes(label=..count..))

ftable(table(alc$high_use, alc$studytime))
##          1   2   3   4
##                       
## FALSE   58 135  52  23
## TRUE    42  60   8   4
#1 < 2hours
42/(42+58)
## [1] 0.42
#2 2 to 5 hours
60/(60+135)
## [1] 0.3076923
#3 5 to 10 hours
8/(8+52)
## [1] 0.1333333
#4 over 10 hours
4/(4+23)
## [1] 0.1481481

Here it’s pretty clear: kids that study more than 5 hours a week are much less likely to be heavy drinkers (13.3% and 14.8%), and kids who study less are much more likely to be heavy drinkers (30.8% and 42%), especially if they spend almost no time studying (< 2 hours).

Task 5

Age

m <- glm(high_use ~ legal, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ legal, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -0.8765  -0.8765  -0.8765   1.5118   1.7344  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -1.2528     0.2673  -4.687 2.77e-06 ***
## legalTRUE     0.4941     0.2945   1.678   0.0934 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 462.70  on 380  degrees of freedom
## AIC: 466.7
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)   legalTRUE 
##  -1.2527630   0.4941012
confint(m)
## Waiting for profiling to be done...
##                   2.5 %     97.5 %
## (Intercept) -1.80548222 -0.7512117
## legalTRUE   -0.06548363  1.0947388

Legal drinking age is not a significant factor at the 95% significance level, but it would be at the 90% level.
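Since logistic regression coefficients are log-odds, exponentiating them gives odds ratios, which are easier to interpret; a minimal sketch for the model above:

#Odds ratios and their 95% confidence intervals
OR <- exp(coef(m))
CI <- exp(confint(m))
cbind(OR, CI)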

Go out

m <- glm(high_use ~ goout, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ goout, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.3721  -0.7672  -0.5450   0.9945   2.3082  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -3.3512     0.4151  -8.073 6.84e-16 ***
## goout         0.7596     0.1157   6.564 5.23e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 415.68  on 380  degrees of freedom
## AIC: 419.68
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)       goout 
##  -3.3511801   0.7596076
confint(m)
## Waiting for profiling to be done...
##                  2.5 %     97.5 %
## (Intercept) -4.1958512 -2.5655359
## goout        0.5385448  0.9931232

How often kids go out with their friends is a significant factor that explains high alcohol use.

Grades

m <- glm(high_use ~ grades, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ grades, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -0.9426  -0.9426  -0.7452   1.4320   1.6835  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -0.5810     0.1537   -3.78 0.000157 ***
## gradesTRUE   -0.5584     0.2261   -2.47 0.013526 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 459.51  on 380  degrees of freedom
## AIC: 463.51
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)  gradesTRUE 
##  -0.5810299  -0.5584044
confint(m)
## Waiting for profiling to be done...
##                  2.5 %     97.5 %
## (Intercept) -0.8871701 -0.2834717
## gradesTRUE  -1.0051883 -0.1174594

Bad grades mean a kid is more likely to be a high alcohol user, and higher grades mean a lower chance.

Studytime

m <- glm(high_use ~ studytime, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ studytime, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.0603  -0.8314  -0.8314   1.2993   2.1010  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   0.3209     0.3076   1.043    0.297    
## studytime    -0.6029     0.1530  -3.941  8.1e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 448.31  on 380  degrees of freedom
## AIC: 452.31
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)   studytime 
##   0.3209036  -0.6028561
confint(m)
## Waiting for profiling to be done...
##                  2.5 %     97.5 %
## (Intercept) -0.2772274  0.9308876
## studytime   -0.9123998 -0.3115583

If a kid spends a lot of time studying, they are less likely to be a high alcohol user.

Exercise 6

Go out, Grades, Studytime

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
m <- glm(high_use ~ studytime+grades+goout, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ studytime + grades + goout, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6683  -0.8015  -0.5478   0.9613   2.5753  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -1.9907     0.5184  -3.840 0.000123 ***
## studytime    -0.5737     0.1659  -3.459 0.000543 ***
## gradesTRUE   -0.3014     0.2485  -1.213 0.225280    
## goout         0.7340     0.1171   6.269 3.64e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 399.34  on 378  degrees of freedom
## AIC: 407.34
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)   studytime  gradesTRUE       goout 
##  -1.9906826  -0.5736953  -0.3013601   0.7340343
confint(m)
## Waiting for profiling to be done...
##                  2.5 %     97.5 %
## (Intercept) -3.0311038 -0.9938476
## studytime   -0.9097114 -0.2575669
## gradesTRUE  -0.7897173  0.1864329
## goout        0.5103598  0.9703992
p <- predict(m, type="response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = p)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)

table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.64659686 0.05497382 0.70157068
##    TRUE  0.19109948 0.10732984 0.29842932
##    Sum   0.83769634 0.16230366 1.00000000
g <- ggplot(alc, aes(x = high_use, y = probability, col=prediction))

g + geom_point()

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2460733

There’s a 25% chance of a wrong prediction, so I don’t think my variables are an excellent predictor. It would predict mostly correctly, but with such a large error rate it’s not really good/usable. Also, there’s probably a lot of overlap between goout and studytime, and between studytime and grades… Maybe different predictors would have captured more of the actual problem?
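The suspected overlap between the predictors could be checked quickly with their pairwise correlations; a minimal sketch (grades is logical, so it is converted to 0/1 first):

#Pairwise correlations between the predictors used in the model
cor(data.frame(studytime = alc$studytime,
               goout = alc$goout,
               grades = as.numeric(alc$grades)))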

Exercise 7

library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2513089

The DataCamp model was 0.26 and mine is 0.25… so they are pretty much the same?

Trying to get a better model:

m <- glm(high_use ~ studytime+failures+goout+absences+sex, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ studytime + failures + goout + absences + 
##     sex, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.0668  -0.7736  -0.5080   0.7663   2.4984  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.27654    0.59805  -5.479 4.28e-08 ***
## studytime   -0.38032    0.17414  -2.184 0.028966 *  
## failures     0.24195    0.21713   1.114 0.265141    
## goout        0.71219    0.12037   5.917 3.28e-09 ***
## absences     0.07748    0.02242   3.456 0.000548 ***
## sexM         0.76991    0.26620   2.892 0.003825 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 380.06  on 376  degrees of freedom
## AIC: 392.06
## 
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept)   studytime    failures       goout    absences        sexM 
## -3.27653834 -0.38032241  0.24195066  0.71218501  0.07747656  0.76991496
confint(m)
## Waiting for profiling to be done...
##                   2.5 %      97.5 %
## (Intercept) -4.48345712 -2.13293535
## studytime   -0.73128369 -0.04615332
## failures    -0.18407800  0.67190587
## goout        0.48205204  0.95506124
## absences     0.03450819  0.12366371
## sexM         0.25194047  1.29791469
p <- predict(m, type="response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = p)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)

table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.65968586 0.04188482 0.70157068
##    TRUE  0.17015707 0.12827225 0.29842932
##    Sum   0.82984293 0.17015707 1.00000000
g <- ggplot(alc, aes(x = high_use, y = probability, col=prediction))

g + geom_point()

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2120419
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2277487

With studytime+failures+goout+absences+sex the cross-validation error drops to about 22.8%… so maybe that’s good enough!


Chapter 4

Task 1-3

data("Boston")
# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits = 2)

# print the correlation matrix
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47
##         ptratio black lstat  medv
## crim       0.29 -0.39  0.46 -0.39
## zn        -0.39  0.18 -0.41  0.36
## indus      0.38 -0.36  0.60 -0.48
## chas      -0.12  0.05 -0.05  0.18
## nox        0.19 -0.38  0.59 -0.43
## rm        -0.36  0.13 -0.61  0.70
## age        0.26 -0.27  0.60 -0.38
## dis       -0.23  0.29 -0.50  0.25
## rad        0.46 -0.44  0.49 -0.38
## tax        0.46 -0.44  0.54 -0.47
## ptratio    1.00 -0.18  0.37 -0.51
## black     -0.18  1.00 -0.37  0.33
## lstat      0.37 -0.37  1.00 -0.74
## medv      -0.51  0.33 -0.74  1.00
# visualize the correlation matrix (corrplot package)
library(corrplot)
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex = 0.6)

The Boston dataset describes housing values in the suburbs of Boston. Here positive correlations are displayed in blue and negative correlations in red; colour intensity and the size of the circle are proportional to the correlation coefficients.

For example, age is strongly negatively correlated with the weighted mean of distances to five Boston employment centres (dis), and the index of accessibility to radial highways (rad) is strongly positively correlated with property tax. The number of rooms is positively correlated with the median value of homes, whereas the percentage of lower-status population (lstat) is strongly negatively correlated with it.

Task 4

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

In the scaling we subtract the column means from the corresponding columns and divide the difference by the standard deviation, so the values become z-scores. This standardizes the data so that every variable has mean 0 and standard deviation 1 (it does not make the distributions normal, only comparable). For some multivariate techniques such as multidimensional scaling and cluster analysis, the concept of distance between the units in the data is of considerable interest and importance, and when the variables in a multivariate data set are on different scales it makes more sense to calculate the distances after some form of standardization.
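Concretely, scale() computes column-wise z-scores; a minimal sketch checking this for one variable:

#Manual z-score of crim should match the corresponding column of scale(Boston)
z_crim <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(z_crim, as.numeric(scale(Boston)[, "crim"]))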

Task 5

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2623762 0.2425743 0.2425743 0.2524752 
## 
## Group means:
##                  zn      indus        chas        nox         rm
## low       0.9825152 -0.9301856 -0.12375925 -0.8892654  0.4522815
## med_low  -0.1024398 -0.2433300  0.04906687 -0.5403407 -0.1642136
## med_high -0.3629840  0.1462347  0.16959035  0.3180592  0.1697274
## high     -0.4872402  1.0171096 -0.04073494  1.0499666 -0.4123478
##                 age        dis        rad        tax     ptratio
## low      -0.9143905  0.8749344 -0.6839198 -0.7564057 -0.39430406
## med_low  -0.3431249  0.3000535 -0.5564696 -0.4850407 -0.02706526
## med_high  0.3802523 -0.3619787 -0.3748243 -0.3025580 -0.32117645
## high      0.7939443 -0.8561960  1.6382099  1.5141140  0.78087177
##                black       lstat         medv
## low       0.38188437 -0.78370134  0.549519292
## med_low   0.30909320 -0.11953033  0.008970193
## med_high  0.08745433 -0.06803167  0.265262047
## high     -0.84154179  0.87594429 -0.623220731
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.05351322  0.82777610 -0.75282575
## indus    0.02935518 -0.28843358  0.31226907
## chas    -0.04586880 -0.08926528  0.13524394
## nox      0.45697957 -0.63888934 -1.35507565
## rm      -0.08005939 -0.09836615 -0.17219124
## age      0.26655912 -0.29035923 -0.28472946
## dis      0.01378977 -0.33552896  0.05378444
## rad      2.93439655  0.96883696 -0.06386802
## tax      0.02655176 -0.09749300  0.48017985
## ptratio  0.08537027  0.10060183 -0.13474229
## black   -0.13256098 -0.01815116  0.10695170
## lstat    0.23483747 -0.24314046  0.48131455
## medv     0.21457481 -0.36302591 -0.17069365
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9461 0.0404 0.0135
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

Here linear discriminant 1 (LD1) explains most of the between-group variance (proportion of trace 0.946).

rad (index of accessibility to radial highways) is the most influential factor in predicting a higher crime rate; it has by far the largest LD1 coefficient.

Task 6

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       10       9        2    0
##   med_low    5      21        2    0
##   med_high   0      10       18    0
##   high       0       0        0   25
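A rough overall accuracy can also be computed as the share of test observations classified into the correct category; a minimal sketch:

#Proportion of correctly classified test observations
mean(correct_classes == lda.pred$class)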

This prediction is very good at predicting high crime rates, but worse at predicting med_low or med_high correctly.

Task 7

data("Boston")
new_boston <- scale(Boston)
# k-means clustering
km <-kmeans(new_boston, centers = 4)

# plot the Boston dataset with clusters
pairs(new_boston, col = km$cluster)

#K-means might produce different results every time, because it randomly assigns the initial cluster centers. The function set.seed() can be used to deal with that.
set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(new_boston, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering
km <-kmeans(new_boston, centers = 2)

# plot the Boston dataset with clusters
pairs(new_boston, col = km$cluster)

One way to choose the optimal number of clusters is to look at the total within-cluster sum of squares (twcss). When the twcss drops sharply, the optimal number of clusters has been found; here the sharp drop happens at 2 clusters. In the LDA there were two “main clusters”: one with high crime (and some med_high points) and another, larger cluster with the rest of the data points. K-means clustering on this data also works best with 2 clusters.

Bonuses

km <-kmeans(new_boston, centers = 3)

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2623762 0.2425743 0.2425743 0.2524752 
## 
## Group means:
##                  zn      indus        chas        nox         rm
## low       0.9825152 -0.9301856 -0.12375925 -0.8892654  0.4522815
## med_low  -0.1024398 -0.2433300  0.04906687 -0.5403407 -0.1642136
## med_high -0.3629840  0.1462347  0.16959035  0.3180592  0.1697274
## high     -0.4872402  1.0171096 -0.04073494  1.0499666 -0.4123478
##                 age        dis        rad        tax     ptratio
## low      -0.9143905  0.8749344 -0.6839198 -0.7564057 -0.39430406
## med_low  -0.3431249  0.3000535 -0.5564696 -0.4850407 -0.02706526
## med_high  0.3802523 -0.3619787 -0.3748243 -0.3025580 -0.32117645
## high      0.7939443 -0.8561960  1.6382099  1.5141140  0.78087177
##                black       lstat         medv
## low       0.38188437 -0.78370134  0.549519292
## med_low   0.30909320 -0.11953033  0.008970193
## med_high  0.08745433 -0.06803167  0.265262047
## high     -0.84154179  0.87594429 -0.623220731
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.05351322  0.82777610 -0.75282575
## indus    0.02935518 -0.28843358  0.31226907
## chas    -0.04586880 -0.08926528  0.13524394
## nox      0.45697957 -0.63888934 -1.35507565
## rm      -0.08005939 -0.09836615 -0.17219124
## age      0.26655912 -0.29035923 -0.28472946
## dis      0.01378977 -0.33552896  0.05378444
## rad      2.93439655  0.96883696 -0.06386802
## tax      0.02655176 -0.09749300  0.48017985
## ptratio  0.08537027  0.10060183 -0.13474229
## black   -0.13256098 -0.01815116  0.10695170
## lstat    0.23483747 -0.24314046  0.48131455
## medv     0.21457481 -0.36302591 -0.17069365
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9461 0.0404 0.0135
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(km$cluster)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

rad is the most influential linear separator for the clusters. It looks like the k-means clustering (3 clusters here) produces clusters where low, med_low, med_high and high mix a lot, so the clustering is not perfect.

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= ~train$crime)
#plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=~classes)

Can’t get the color with km$cluster to work…
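The problem is presumably a length mismatch: km$cluster has one value per row of the full scaled Boston data (506), while matrix_product only has the training rows (404). A hedged sketch of one workaround would be to run k-means on the training predictors themselves so the lengths match:

#Re-run k-means on the same rows that were projected onto the LDA dimensions
km_train <- kmeans(model_predictors, centers = 3)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = ~as.factor(km_train$cluster))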


Chapter 5

Task 1

human <- read.table("~/GitHub/IODS-project/data/human.csv", header = TRUE, row.names= 1, sep = ",")

# visualize the 'human_' variables
ggpairs(human)

# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot()

summary(human)
##    edu2Ratio         LabRatio      LifeExpectancy      EduExp     
##  Min.   :0.1717   Min.   :0.1857   Min.   :49.00   Min.   : 5.40  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:66.30   1st Qu.:11.25  
##  Median :0.9375   Median :0.7535   Median :74.20   Median :13.50  
##  Mean   :0.8529   Mean   :0.7074   Mean   :71.65   Mean   :13.18  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:77.25   3rd Qu.:15.20  
##  Max.   :1.4967   Max.   :1.0380   Max.   :83.50   Max.   :20.20  
##       GNI          MatMortality       AdoBirth       ParliamentF   
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

In general, female education correlates slightly to moderately positively with “good things” like life expectancy, and negatively with “bad things” like maternal mortality and adolescent births. Maternal mortality is in general very low, but very high in some countries: in Europe it is generally < 10, but in some African countries it is very high, e.g. Sierra Leone (the maximum, 1100) and Chad (980). Many variables show this same pattern, where the majority of Western countries are grouped around the peak and there is a second, smaller “bump” where the developing countries are. In general most of these variables look good (life expectancy is concentrated at high values, adolescent birth rate at low values), except parliamentary representation, which is concentrated at low values (women are underrepresented in most countries).

Task 2: Principal Component Analysis

The original variables of the data might contain too much information for representing the phenomenon of interest, so we reduce the dimensions to the “most essential” ones. We use Principal Component Analysis (PCA), an unsupervised method. PCA is sensitive to the relative scaling of the original features and assumes that features with larger variance are more important than features with smaller variance, so the data should be standardized before using PCA.

The first principal component captures the maximum amount of variance from the features in the original data.

The second principal component is orthogonal to the first and captures the maximum amount of the remaining variability.
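In prcomp terms, the proportion of variance captured by each component comes directly from the component standard deviations; a minimal sketch (the scale. = TRUE argument standardizes inside prcomp, which is just one possible way to do it):

#Proportion of total variance explained by each principal component
pca <- prcomp(human, scale. = TRUE)
round(100 * pca$sdev^2 / sum(pca$sdev^2), 1)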

Not standardized:

#perform principal component analysis (with the SVD method)
pca_human <- prcomp(human)

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

This does not work at all… standardization is very important. There is not much that can be interpreted here and there are many warnings. The arrows don’t really show up, and the only visible thing is that GNI drives the grouping to the left while all the other variables group to the right. This is not the same as in the corrplot, where GNI has a positive relationship with life expectancy and expected years of education… This plot doesn’t work.
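The reason is the raw variances: GNI takes values in the thousands while most other variables stay below a few hundred, so the unscaled PCA is dominated by GNI alone. A quick check:

#Raw variances of the variables; GNI dwarfs everything else
round(sapply(human, var), 1)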

Task 3 & 4

Standardized

# standardize the variables
human_std <- scale(human)

# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human_std)

# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2, ], digits = 1)

# print out the percentages of variance
pca_pr
##  PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8 
## 53.6 16.2  9.6  7.6  5.5  3.6  2.6  1.3
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

53.6+16.2
## [1] 69.8

After standardization we get a working plot where we can interpret relationships and groups. The correlations match those seen in the corrplot.

A biplot is a way of visualizing two representations of the same data: 1) the observations in a lower-dimensional representation (the scatter plot of the first two components) and 2) the original features and their relationships with each other and with the principal components (the arrows). The angle between two arrows can be interpreted as the correlation between the features, and the angle between an arrow and a PC axis as the correlation between the feature and that component (in both cases a small angle means a high positive correlation). The length of an arrow is proportional to the standard deviation of the feature.

Here we see that the first two principal components explain 69.8% of the variation, so a lot. We also see the same correlations as in the corrplot: maternal mortality and adolescent birth rate group countries towards the right (developing countries), while the female education indexes and life expectancy group countries to the left (European and other developed nations). There is a strong negative correlation between the left- and right-grouping variables, and we can also see that adolescent birth rate and maternal mortality correlate closely with each other… the same is true for the variables on the left. LabRatio (the labour-force participation ratio between the sexes) and ParliamentF (female representation in parliament) contribute most to PC2, which explains only 16.2% of the variation. Absolutely clear groups don’t really form, and the majority of countries are in the middle and middle-left; only some developing countries clearly separate towards the right.

Task 5: Multiple Correspondence Analysis

Multiple Correspondence Analysis (MCA) is also a dimensionality reduction method. It analyses the pattern of relationships between several categorical variables, but can also use continuous variables as supplementary variables. MCA works on frequencies and can also be used for text data.

For categorical variables the analysis is based on either an indicator matrix (binary) or a Burt matrix (two-way cross-tabulations of all variables).

Eigenvalues are the variances and the percentages of variance retained by each dimension.

Individuals are the rows of the data: their coordinates and their contributions to the dimensions.

Categories are the coordinates and contributions of the variable categories.

Categorical variables (eta2) are the squared correlations between each variable and the dimensions.

library(FactoMineR)
library(tidyr)

data(tea)

keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
dim(tea_time)
## [1] 300   6
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value),theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))) + facet_wrap("key", scales = "free") + geom_bar()
## Warning: attributes are not identical across measure variables;
## they will be dropped

300 tea consumers answered a survey about their consumption of tea. The questions were about how they consume tea, what they think of tea, and descriptive questions (sex, age, socio-professional category and sport practice). Except for age, all the variables are categorical. The “how” variable tells what form of tea people drink, and tea bags seem most popular; the “How” variable tells what people drink their tea with, and most commonly it is nothing (alone). People rarely drink tea at lunch. Sugar vs. no sugar is roughly 50/50. Most people drink Earl Grey, and most get their tea from a chain store.

MCA

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage="quali")

With the MCA we can see that some things group together. People that get their tea from a tea shop drink unpackaged tea and are more likely to drink green tea than others, whereas people that get their tea from a chain store most likely drink it from a tea bag. Categories that were split roughly 50/50 don’t move far from the middle of the MCA plot, so sugar/no sugar sit near the origin. Also, because almost everyone answered “Not.lunch”, that variable doesn’t group anywhere in particular…

Other plots

plotellipses(mca)

This shows how the chosen columns spread out in the MCA. Here it’s also visible that tea shop and unpackaged group together. It’s also easier to see that people who drink unpackaged tea from tea shops mostly drink it alone or with lemon, less often with milk, and rarely or never (hard to see) with “other”. This group also seems to mostly drink tea “Not.lunch”.